COVID-19: Containing the epidemic

Maxime Cartan
Apr 9, 2020

Introduction

Citalid’s job is to help companies and insurers manage cyber risk by quantifying its frequency and the financial losses it causes. Our software solution allows companies to build cyber risk management strategies that are tailored as well as efficient and cost-effective.

During the current COVID-19 crisis, some key cybersecurity vendors are providing frontline players (e.g. hospitals) with their products for free. We strongly support these initiatives. Since our product is designed for decision-makers to anticipate crises and prepare for them, rather than to deal with them once they occur, we thought about how else we could help.

We happen to help cyber insurers quantify the financial exposure of their portfolios of insured companies. One of the greatest issues faced by insurers is the systemic nature of cyber risk, which can quickly lead to pooling failures and ruin. In this context, Citalid has built stochastic models to simulate the spread of malware outbreaks, such as WannaCry and NotPetya in 2017.

Given the similarities between computer and biological viruses, we decided to build a simple model whose sole purpose is to assess the efficiency of possible containment strategies to fight the SARS-CoV-2 virus.

One essential caveat: we are neither epidemiologists nor physicians. The present article is intended only to raise awareness and share our methods through a specific use case. The approach developed below is derived from various research studies and should be strictly restricted to pedagogical simulations. In cybersecurity, on the other hand, our simulations rely upon cutting-edge models we have been developing for years, informed by our threat intelligence expertise and our proprietary database.

N.B.: the article is also available in French here. Also, if you are eager to jump directly to the conclusions without reading the details of the model, you may click here (SPOILER ALERT!).

1. Modeling the spread of COVID-19

1.1 General approach

The key insight proposed here is to model the social relationships inside a given population with graphs. A node stands for an individual, whereas an edge linking two nodes stands for a social relationship. Since social relationships are bilateral, the edges are undirected.

In the initial state, a given number of randomly chosen individuals within the population are assumed to be infected (nodes marked in red below). Every day, the infected people come across their acquaintances (i.e. the other nodes linked to them), who thus have a given probability of being exposed to the virus and of eventually becoming infected as well. As the simulation runs, the epidemic propagates within the population.

1.2. Modeling the epidemic

Let’s describe the epidemic modeling in greater detail. We have chosen to work with the commonly implemented SEIR model, which is a special case of what is called the Kermack-McKendrick theory. This model divides the population into four categories:

  • Susceptible individuals, who have not been exposed to the virus;
  • Exposed individuals, who incubate the virus without symptoms and who are assumed not to be contagious;
  • Infected individuals, who are contagious and usually symptomatic;
  • Recovered individuals, who have been infected but are now cured and immune to the virus.

We have added two other categories to this model:

  • Quarantined individuals, who are placed in full lockdown until their recovery or death and thus do not risk infecting anyone else. We assume that a given number of virus detection tests are carried out among the individuals whose symptoms are close to those of COVID-19, and that each test has a probability p[quarantine] of revealing that the individual has been exposed to the virus, in which case the tested individual is quarantined;
  • Dead individuals, who result from the infection fatality rate of the epidemic. We assume that death happens at the end of the infection period with probability p[fatality], so an infected individual recovers with probability 1 - p[fatality].

N.B.: Be careful not to confuse the “infection fatality rate”, which is the proportion of deaths compared to the total number of infected people, with the “case fatality rate”, which is the proportion of deaths compared to the total number of diagnosed people. In the case of COVID-19, not all infected people can be diagnosed, thus the case fatality rate is higher than the infection fatality rate, which we consider here.
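
To make the state machine concrete, here is a minimal sketch in Python of the six states and the daily transitions described above. The `person` attributes are hypothetical names of ours, not the actual implementation; quarantine entry is covered separately in the testing sketch of section 2.1.4.

```python
from enum import Enum, auto
import random

class State(Enum):
    SUSCEPTIBLE = auto()   # never exposed to the virus
    EXPOSED = auto()       # incubating, assumed not contagious
    INFECTED = auto()      # contagious, usually symptomatic
    QUARANTINED = auto()   # isolated until recovery or death
    RECOVERED = auto()     # cured and immune
    DEAD = auto()

def advance_one_day(person, day):
    """Advance one individual's state by one day (sketch; `person`
    carries the per-individual parameters drawn in section 1.2)."""
    if person.state is State.EXPOSED:
        # End of incubation: the individual becomes contagious.
        if day - person.exposure_day >= person.incubation_time:
            person.state = State.INFECTED
            person.infection_day = day
    elif person.state in (State.INFECTED, State.QUARANTINED):
        # End of infection: death with probability p_fatality,
        # recovery (and immunity) otherwise.
        if day - person.infection_day >= person.infection_time:
            person.state = (State.DEAD
                            if random.random() < person.p_fatality
                            else State.RECOVERED)
```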

The parameters of the epidemic are the following:

  • The initial number of infected individuals, drawn at random at the start of the simulation.
  • The incubation time, which varies across individuals. We draw a specific incubation time value for each individual according to a chosen probability distribution. A probability distribution represents the probabilities of occurrence of every possible incubation time value in our draws. The probability distribution is chosen such that the mean incubation time is 5 days.
  • The infection time, which again varies across individuals. We draw a specific infection time value for each individual according to a chosen probability distribution, such that the mean infection time is 8 days.
  • The infection fatality rate (p[fatality]), as discussed above, varies with age, health, lifestyle, etc. We draw a specific infection fatality rate value for each individual according to a chosen probability distribution, such that the mean fatality rate is 2%.

N.B.: we relied on this French actuarial thesis to choose the probability distributions. As explained there, Gamma laws are the most widely used when it comes to modeling temporal phenomena, making them ideal candidates to model both the incubation time and the infection time. Regarding the fatality rate, we used a log-normal distribution because of its heavy tail, which allows a few extreme values to occur with significant probability. This is, for example, the law used by Swiss Re to model the rate of excess mortality, calibrated by weighting data from several 20th-century epidemics in the United States.
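
As an illustration, here is how such per-individual draws could look with NumPy. The shape and dispersion values below are our own illustrative choices, not the exact calibration from the thesis; only the means (5 days, 8 days, 2%) come from the model above.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n = 100  # population size

# Incubation and infection times: Gamma laws with means of 5 and
# 8 days (shape/scale values here are illustrative).
incubation_time = rng.gamma(shape=5.0, scale=1.0, size=n)  # mean 5 days
infection_time = rng.gamma(shape=8.0, scale=1.0, size=n)   # mean 8 days

# Infection fatality rate: log-normal with a heavy right tail and a
# mean of 2%. For a log-normal, E[X] = exp(mu + sigma^2 / 2).
sigma = 0.5                            # illustrative dispersion
mu = np.log(0.02) - sigma**2 / 2       # solves E[X] = 0.02
p_fatality = np.clip(rng.lognormal(mean=mu, sigma=sigma, size=n), 0.0, 1.0)
```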

1.3. Modeling population and social ties

In order to model social relationships within a population in a representative way, we generate a random graph.

Here are two commonly used techniques for generating random graphs representing real-world networks.

  • The Barabási–Albert model generates graphs in which the distribution of the number of edges a given node is connected to converges to a power law (so-called “scale-free networks”). In other words, most nodes are connected to only a few other nodes, but a few nodes are connected to a great many — following the “rich get richer” paradigm. Such a phenomenon is very common in online social networks, for instance, and accounts for “super spreaders” in our case — individuals who potentially cross the path of a lot of other individuals and risk transmitting the virus to many of them.
  • The Watts–Strogatz model generates graphs with small-world properties, including high clustering and short average path lengths. In other words, there are local communities (e.g. groups of friends) and the number of edges to follow to reach any given node is quite short on average. This latter property is often mentioned about online social networks, when one says that all people are at most six social connections away from each other.

Neither of the above models is fully satisfactory on its own to represent a real-world population.

Therefore, we decided to mix them. Our idea is the following (a code sketch follows the list):

  • First, we generate a Watts-Strogatz (i.e. small-world) graph G[WS], which represents the acquaintances that the nodes encounter on a regular basis, e.g. colleagues & friends. Indeed, a person often comes across the same people every day.
  • Among the acquaintances of each node in G[WS], we tag a small number of nodes as being family members. The family members are encountered more often and for a longer time, resulting in a higher transmission probability.
  • Finally, we generate a Barabási-Albert (i.e. “super spreader”) graph G[BA], which represents the unknown people each of us comes across every day, for example in public places, supermarkets, public transport, etc. People living in big cities for instance are particularly likely to bump into a lot of people every day.
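
A minimal sketch of this construction with networkx. The graph parameters echo the small-graph settings of section 2.1 but are illustrative, and the family tagging below does not enforce symmetric family cliques as a real implementation would:

```python
import random
import networkx as nx

n = 100
# Regular acquaintances (colleagues & friends): small-world graph.
G_ws = nx.watts_strogatz_graph(n, k=4, p=0.1)
# Chance encounters in public places: scale-free "super spreader" graph.
G_ba = nx.barabasi_albert_graph(n, m=15)

# Tag about 2 neighbors per node in G_ws as family members.
family = {u: set(random.sample(list(G_ws[u]), k=min(2, G_ws.degree(u))))
          for u in G_ws.nodes}
```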

Every day, a given individual (see the sketch after this list):

  • Spends a lot of time with a small number of people, who are tagged as family members among the individual’s neighbors in the G[WS] graph. Should the individual be infected, the family members in turn risk being exposed to the virus with a probability: p[family].
  • Spends less time with a bigger number of acquaintances, who correspond to the individual’s neighbors in the G[WS] graph. Should the individual be infected, the acquaintances in turn risk being exposed to the virus with a probability:
    p[acquaintances] < p[family].
  • Comes across a large number of unknown people in public places, who correspond to the individual’s neighbors in the G[BA] graph. Should the individual be infected, the unknown people in turn risk being exposed to the virus with a probability:
    p[unknown] << p[acquaintances] < p[family].
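
Here is how one day of contacts could translate into code for a single infected node, under the assumptions above. `status` maps each node to a `State` from the earlier sketch, and all names are ours:

```python
import random

def daily_exposures(u, G_ws, G_ba, family, probs, status):
    """One day of contacts for an infected node u (sketch)."""
    # Regular contacts: family members vs. other acquaintances.
    for v in G_ws[u]:
        p = probs["family"] if v in family[u] else probs["acquaintances"]
        if status[v] is State.SUSCEPTIBLE and random.random() < p:
            status[v] = State.EXPOSED
    # Brief encounters with unknown people in public places.
    for v in G_ba[u]:
        if status[v] is State.SUSCEPTIBLE and random.random() < probs["unknown"]:
            status[v] = State.EXPOSED
```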

1.4. Modeling lockdown strategies

Our final goal is to assess the efficiency of different containment strategies.

Lockdown strategies, as currently enforced by most governments, aim to enforce social distancing between people for a certain period of time, hoping to slow down or fully contain the epidemic.

In order to simulate the impact of lockdown strategies on the population, we designed the following principles (sketched in code after the list), assuming that during lockdown:

  • People lock themselves down with their family, that is to say with the family members tagged in the G[WS] graph. They spend a lot more time with their family than without lockdown.
    We assume that during lockdown, any two nodes of a given family are connected by an edge (i.e. all members of the same family spend time with each other), and that two nodes of different families are not connected by any edge (i.e. there are no longer any social relationships between different families). Therefore, the lockdown partitions the G[WS] graph into many small isolated sets of nodes. If an individual is infected, the transmission probability, that is to say the probability of exposing one of the family members to the virus in turn, gets higher than without lockdown:
    p[family_when_lockdown] = K[family_when_lockdown] * p[family], with K[family_when_lockdown] > 1.
  • People stop seeing their colleagues & friends, that is to say their neighbors in the G[WS] graph who are not tagged as family members.
    Therefore, an infected individual does not risk transmitting the virus to acquaintances who are not family members: p[acquaintances_when_lockdown] = 0. This assumption is optimistic given what is actually observed in populations where lockdowns are implemented, but is hopefully close to reality: #stayathome.
  • Most people need to keep on going to public places like supermarkets, where they come across unknown people. However, they do so much less often and bump into far fewer people than without lockdown.
    Therefore, an infected individual risks exposing unknown people to the virus with a decreased transmission probability:
    p[unknown_when_lockdown] = K[unknown_when_lockdown] * p[unknown], with K[unknown_when_lockdown] < 1.
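
In code, the lockdown can be expressed as a transformation of the three transmission probabilities. A sketch, where the K factors shown are those later used in the small-graph simulations of section 2.1:

```python
def lockdown_probs(probs, k_family=5.0, k_unknown=1/3):
    """Transmission probabilities during lockdown (sketch)."""
    return {
        # More time spent with family: K[family_when_lockdown] > 1.
        "family": min(1.0, k_family * probs["family"]),
        # No more contact with non-family acquaintances.
        "acquaintances": 0.0,
        # Far fewer chance encounters: K[unknown_when_lockdown] < 1.
        "unknown": k_unknown * probs["unknown"],
    }
```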

2. Running simulations and assessing the effectiveness of different containment strategies

Having modeled the epidemic, population and lockdown strategies, we will now run simulations to assess the effects of different courses of action on the spread of the virus.

Running many simulations on large graphs requires huge computing power. Therefore, we first determined the most promising strategies by running a large number of simulations on small graphs, with algorithms optimized to perform the simulations in a reasonable amount of time. Having identified the most promising course of action, we then checked that it scaled up well to large graphs.

2.1. Small graphs with Monte-Carlo simulations

The best method to run simulations and aggregate actionable results when there is a great deal of uncertainty about the input parameters is to use Monte Carlo simulations. This is what our algorithms do to quantify the financial exposure of our clients to cyber risk, and we believe it is also appropriate in the case of the COVID-19 analysis.

The main idea of the Monte Carlo method applied here is to run a lot of simulations with different initial states (e.g. individuals initially infected) to get a representative coverage of all possible situations, aggregate the results and compute statistical metrics reflecting the spread of the epidemic. Remember that the model assigns different parameters (social relationships, incubation and infection times, fatality rate, etc.) to every node.
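
In code, the aggregation boils down to a short loop like the following sketch, where `run_simulation` is assumed to run one full epidemic with fresh random draws and return one daily count per state (an array of equal shape across runs):

```python
import numpy as np

def monte_carlo(run_simulation, n_runs=100):
    """Run many simulations that differ only by their random draws
    (initially infected set, per-individual parameters) and return
    the per-day mean and standard deviation of the state counts."""
    runs = np.stack([run_simulation() for _ in range(n_runs)])
    return runs.mean(axis=0), runs.std(axis=0)
```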

To get an idea of how the initial state of infected individuals impacts the simulation, you may watch the two videos below. The videos correspond to simulations run with the same parameters but with two different sets of 3 individuals initially infected. In the first simulation, the epidemic spreads slowly and stops around day 60, whereas in the second one it spreads within almost the whole population and stops only around day 160.

For computing power reasons, we ran the simulations on small graphs representing a population of 100 individuals. The idea is to test a small sample of the population hundreds of times to study the characteristics of its general behavior with regard to the virus.

Our goal is to test the effect of different lockdown and quarantine strategies against the virus. A strategy consists of a specific tuning of the following parameters (captured in a small structure after this list):

  • the lockdown duration, in weeks;
  • the number of virus detection tests carried out daily among the individuals with symptoms close to the COVID-19 ones, along with the quarantine probability p[quarantine] for each test to detect COVID-19 and result in a quarantine of the tested individual;
  • the strategy threshold, which corresponds to the number of infected individuals required before the strategy (lockdown, quarantine or both) is enforced.
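
For clarity, a strategy can be captured in a small structure like this (the names are ours, not the actual implementation):

```python
from dataclasses import dataclass

@dataclass
class Strategy:
    """One containment strategy = one tuning of the knobs above."""
    lockdown_weeks: int    # 0 means no lockdown
    daily_tests: int       # 0 means no testing/quarantine policy
    p_quarantine: float    # probability that a test leads to a quarantine
    threshold: int         # cumulative infections before enforcement
```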

For each tested combination of these 3 parameters, we run 100 simulations, each with a different random set of 3 initially infected individuals. We chose to keep the following parameters the same for each simulation:

  • Total population size: 100;
  • Mean number of family members per individual: 2;
  • Lockdown factor applied to the transmission probability for family members: K[family_when_lockdown] = 5;
  • Mean number of acquaintances per individual (incl. family): 5;
  • Mean number of unknown people per individual: 30;
  • Lockdown factor applied to the transmission probability for unknown individuals: K[unknown_when_lockdown] = 1/3.

We also compute the transmission probability so that each infected individual on average exposes three other individuals without lockdown (see the sketch after this list), ensuring that:

  • p[acquaintances] = p[family] / 3;
  • p[unknown] = p[family] / 10.
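
One simple way to back out p[family] from this constraint is the following crude sketch: it treats daily exposures as additive across days and neighbors, ignoring that a given neighbor can only be exposed once, so it is an approximation rather than the exact calibration.

```python
# Back out p_family so that an infected individual exposes ~3 others
# over a mean infection time of 8 days (crude additive approximation).
n_family, n_other_acq, n_unknown = 2, 3, 30  # 5 acquaintances incl. 2 family
mean_infection_days = 8
target_exposures = 3

# exposures/day = 2*p + 3*(p/3) + 30*(p/10) = 6*p
daily_factor = n_family + n_other_acq / 3 + n_unknown / 10
p_family = target_exposures / (mean_infection_days * daily_factor)  # 0.0625
p_acquaintances = p_family / 3
p_unknown = p_family / 10
```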

2.1.1. Worst-case simulation: no lockdown — no quarantine

Let us start by simulating the worst case: no lockdown — no quarantine enforced. This simulation corresponds to a scenario where no measures are taken to contain the epidemic. In order to better visualize what happens, here is a video of one of the 100 simulations we ran.

The following chart represents the mean results of the 100 simulations. The curves respectively represent the evolution over time of the numbers of susceptible, exposed, infected, recovered and dead individuals.

The following histograms show the distribution of susceptible, recovered and dead individuals at the end of each of the 100 simulations.

In a no-lockdown and no-quarantine scenario, the impact of the epidemic on the population is maximal.

The average number of susceptible individuals, that is to say individuals that were never exposed to the virus, is only 18.2. The average number of people who died because of the virus is 2.2.

The number of infected people peaks at a high value (around 12% of the population is infected at the same time), which risks overwhelming health infrastructures and making the management of the epidemic even more challenging. It is noteworthy that, in practice, the fatality rate increases when health infrastructures are overwhelmed: we chose not to model this particular factor, which is an area of improvement for our model.

N.B.: From now on and until the end of the article, we choose to mainly focus on the variation in the number of susceptible individuals at the end of the epidemic in order to assess the impacts of the containment strategies. The different states (Susceptible, Exposed, Infected, Quarantined, Recovered and Dead) are communicating vessels in our model: an increase in the number of susceptible individuals at the end of the epidemic directly implies a decrease in the sum of recovered and dead individuals. As our focus is studying the virus spread rather than predicting the number of deaths, we consider this to be the most relevant criterion.

2.1.2. Impact of the lockdown duration

Let us simulate the enforcement of a 2-week lockdown, still without any quarantine strategy. For these simulations, we set the lockdown threshold to 30: the lockdown is enforced as soon as 30 individuals (that is to say 30% of the whole population of 100 individuals) have been infected, as a cumulative count since the start of the epidemic.

In the chart below, the lockdown period is highlighted in grey.

The following histograms overlay the distributions of susceptible, recovered and dead individuals at the end of each of the 100 simulations, for simulations without lockdown and with a 2-week lockdown.

The results for the 2-week lockdown simulations are better than for the no-lockdown simulations. The number of susceptible individuals in the population, that is to say individuals who have not been exposed to the virus, increases by 54%. The peak of infected people is flattened out, and the number of dead people is reduced by 16%.

Let us now evaluate the impact of different lockdown durations.

The following charts show simulations results for different lockdown durations ranging from 0 to 12 weeks.

The results get better with a longer lockdown. The mean number of susceptible individuals never exposed to the virus gets higher, and the mean number of deaths decreases.

The following histograms compare a no-lockdown scenario to a 12-week lockdown scenario.

The results for the 12-week lockdown simulations are significantly better than for the no-lockdown simulations.

KEY TAKEAWAY 1: even when enforced late (30% of individuals already infected), lockdown strategies decrease the number of individuals exposed to the virus and flatten out the peak of infected people. The number of susceptible individuals who have not been exposed to the virus at the end of the epidemic is more than doubled in our small-scale simulations for a 3-month lockdown compared to the no-lockdown simulations, and the number of deaths is reduced by around 27%.

KEY TAKEAWAY 2: longer lockdown strategies are effective to reduce the number of exposed individuals. However, above a certain lockdown duration, we observe diminishing returns. In our small-scale simulations, the first month of lockdown is very significant (183% increase in susceptible individuals compared to no-lockdown simulations), then the returns decrease sharply after the first 6 weeks of lockdown, stabilizing close to a 225% increase in susceptible individuals.

What if we enforce lockdown earlier?

2.1.3. Impact of the lockdown threshold

Let us simulate the enforcement of a lockdown as soon as respectively 5, 15 and 30 individuals are infected, still without any quarantine strategy. Simulations are run for different lockdown durations ranging between 2 and 12 weeks. Below are the results obtained for lockdown durations of 2 and 6 weeks.

This graph highlights the importance of enforcing the lockdown as soon as possible, that is to say before a high number of people are infected by the virus. The number of susceptible individuals is increased by 37% with an early-lockdown strategy (lockdown threshold of 5 individuals), compared to a late lockdown (lockdown threshold of 30 individuals).

The number of susceptible individuals at the end of the epidemic in the early (threshold of 5 individuals) 2-week lockdown scenario is very close to the number of susceptible individuals in the late (threshold of 30 individuals) 4-week lockdown.

If the lockdown is enforced very late (threshold of 30 infected individuals), a particularly long lockdown duration of 12 weeks increases the number of susceptible individuals by only 12% compared to an early 2-week lockdown.

In other words, the later the lockdown is enforced (corresponding to a high lockdown threshold), the longer it should be in order to be efficient. The efficiency of extending the lockdown duration converges towards a limit, as expected because of the diminishing returns mentioned above.

KEY TAKEAWAY 3: enforcing the lockdown earlier reduces the number of individuals exposed to the virus. The effect gets even stronger as the lockdown duration increases. The later the lockdown is enforced, the longer it should be in order to be efficient: a 2-week lockdown enforced early (as soon as 5 individuals out of 100 are infected) has similar effects to a 4-week lockdown enforced late (as soon as 30 individuals out of 100 are infected). Due to diminishing returns, a significant delay in lockdown enforcement is very hard, if not impossible, to make up for simply by extending the lockdown duration.

What are the results in a scenario where a 3-month lockdown is enforced as soon as 5 individuals are infected?

In this scenario, the impact of the epidemic is drastically reduced. The corresponding simulations lead to an average of 71.7 susceptible individuals (vs. 18.2 in the no-lockdown scenario), 27.5 recovered individuals (vs. 79.6) and 0.8 dead individuals (vs. 2.2) at the end of the simulation.

What about quarantine then?

2.1.4. Impact of the testing strategy

Let us model the action of randomly carrying out a given number of virus detection tests every day among the individuals whose symptoms are close to those of COVID-19. As with the lockdown strategies, we assume that the tests do not start before a given threshold (the quarantine threshold) of infected individuals is reached.

Each test may be positive (i.e. the tested individual is either exposed or infected) or negative. We consider that the probability for a test to be positive and result in a quarantine of the tested individual is p[quarantine] = 0.3. This probability roughly corresponds to what we observe in France: about 10k tests are currently carried out every day among the population, and approximately 3k individuals are detected as having COVID-19. We assume that an individual who tests positive for COVID-19 is put in full lockdown until the end of the infection time, thus not risking exposing anyone else to the virus.
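
A sketch of the daily testing step, reusing the hypothetical `State` and `person` structures from the earlier sketches:

```python
import random

def daily_testing(population, n_tests, p_quarantine=0.3):
    """Test up to n_tests symptomatic individuals; each positive test
    sends the individual into quarantine until recovery or death."""
    symptomatic = [p for p in population
                   if p.state is State.INFECTED]  # eligible for testing
    for person in random.sample(symptomatic,
                                k=min(n_tests, len(symptomatic))):
        if random.random() < p_quarantine:
            person.state = State.QUARANTINED
```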

Let us start by assuming that the quarantine threshold of infected individuals to be reached before the tests start is 30 individuals, and that 1 test is then carried out every day among the symptomatic individuals. For a country like France, that would mean running roughly 700k tests per day instead of the current 10k! However, running fewer than one test a day is inefficient in our simulated population of 100 individuals, given the small number of infected individuals at any time.

The strategy is quite effective, though not miraculous. Without any testing and quarantine policy, the average number of susceptible individuals at the end of the epidemic was 18.2, whereas 1 test per day increases this number to 31.9.

Let us now compare the results corresponding to different numbers of daily tests.

KEY TAKEAWAY 4: a testing policy, combined with a quarantine for the positively tested individuals, reduces the spread of the virus. The more tests available each day, the greater the reduction. Starting to run 1 test every day among symptomatic individuals when 30 individuals have been infected increases the number of susceptible individuals at the end of the epidemic by 75% and reduces the number of deaths by 20%. Meanwhile, running 5 tests a day nearly triples the number of susceptible individuals, and reduces the number of deaths by more than 40%. However, considering the practical difficulties of implementing massive testing of the individuals, quarantine strategies may not be as efficient as lockdown strategies.

Let us study the impact of the quarantine threshold, that is to say the number of infected individuals to be reached before the testing policy is enforced.

With only one test available on a daily basis, a quarantine strategy is efficient only if the quarantine threshold is very low (5 infected individuals), that is to say if the testing policy is enforced very early. The number of susceptible individuals then increases by roughly 60% compared to quarantine policies enforced later (quarantine thresholds of 15 or 30).

Let us check how the epidemic would evolve should more tests be available.

With 5 tests available on a daily basis, a 15-individual threshold allows a 25% increase in susceptible individuals compared to a 30-individual threshold. A 5-individual threshold is optimal, with a 65% increase in susceptible individuals compared to the 30-individual threshold.

KEY TAKEAWAY 5: testing policies are all the more efficient when enforced early, and the gap seems to widen as the number of tests available on a daily basis increases. A threshold of 5 infected individuals, when compared to a threshold of 30 infected individuals, increases the number of susceptible individuals at the end of the epidemic by 60% for a 1-test policy, and by 65% for a 5-test policy.

What if we combine quarantine strategies and lockdown strategies?

2.1.5. Combined impact of quarantine and lockdown strategies

Let us combine lockdown and testing strategies in different configurations. The probability of each test resulting in a quarantine is still p[quarantine] = 0.3.

First, we compare all combinations of no lockdown, 1-month lockdown, 0 daily tests and 1 daily test for a late lockdown (threshold of 30 infected individuals).

The 1-month lockdown — no test scenario is a bit more efficient than the 1 daily test — no lockdown one, but not by much (20% increase). However, the combination of the 1-month lockdown and the 1 daily test strategies gives promising results, nearly tripling the number of susceptible individuals at the end of the epidemic.

What would happen if we were to enforce the lockdown and tests earlier?

Interestingly, the gap between the 1-month lockdown — no test scenario and the no lockdown — 1 daily test scenario seems to be narrowing. Both are more efficient when enforced earlier. The combined strategy leads to the best results we observed across all our simulations.

Even though the number of daily available tests is small compared to the population size, a virtuous circle is observed. Indeed, the lockdown slows down the spread of the epidemic. This offers more time to test the population and quarantine the infected individuals, thus containing the epidemic even further.

KEY TAKEAWAY 6: combining a testing policy with a lockdown creates a virtuous circle, especially when enforced early. As the lockdown slows down the spread of the epidemic, more individuals can be tested and quarantined, further containing the virus. Compared to the worst-case scenario without any lockdown or testing, the number of susceptible individuals after the epidemic nearly triples when a 1-month late lockdown (threshold of 30 infected individuals) is combined with a 1 daily test policy. If enforced early (5-individual threshold), such a combined strategy more than quadruples this number.

2.2. Scaling simulations to large graphs

We ran our simulation on a much larger population: a million-node graph, which could for instance represent a big city, such as Marseille in France. Some phenomena are easier to spot with larger graphs, the pool of healthy individuals being much larger.

We ran only one simulation per parameter set instead of 100, both for computing reasons and because, with large graphs, the spread is very likely to converge towards a common end state. We chose the following parameters:

  • Total population size: 1,000,000;
  • Mean number of family members per individual: 3;
  • Lockdown factor applied to the transmission probability for family members: K[family_when_lockdown] = 5;
  • Mean number of acquaintances per individual (incl. family): 30;
  • Mean number of unknown people per individual: 300;
  • Lockdown factor applied to the transmission probability for unknown individuals: K[unknown_when_lockdown] = 1/10.

We also compute the transmission probability so that each infected individual on average exposes three other individuals to the virus without lockdown, ensuring that:

  • p[acquaintances] = p[family] / 5;
  • p[unknown] = p[family] / 100.

2.2.1. Worst-case simulation: no lockdown — no quarantine

Let us first check what happens in a no lockdown — no quarantine scenario.

The peak number of infected individuals in the large-graph simulation occurs at a much later date than in the previous simulations with smaller graphs (day 133 instead of day 36), as there are many more individuals to infect before reaching it. It is also noteworthy that the peak has the same intensity, i.e. 12% of the whole population, as in the case of smaller graphs.

2.2.2. Impact of the lockdown duration

Let us now simulate lockdowns of different durations. The lockdown is enforced when 15k individuals are infected, which is quite late (it roughly corresponds to the timing of the lockdown enforcement in France).

As illustrated on the graph above, a longer lockdown does not significantly reduce the number of people who have been exposed to the virus by the end of the epidemic (a 3-month lockdown only spares 2,211 more individuals from exposure, that is to say 0.2% of the population).

The following graphs illustrate the evolution over time of the number of susceptible, exposed, infected, recovered and dead individuals, with different lockdown durations.

With a 1-month lockdown:

And with a 3-month lockdown:

A quick look at the graphs shows that a longer lockdown delays the full extinction of the epidemic, that is to say the moment when there are no more exposed or infected individuals. The end of the epidemic occurs around day 650 for a 1-month lockdown, but not until around day 1600 for a 3-month lockdown! The number of dead individuals is not much reduced by a longer lockdown, decreasing by nearly 1%. Not so convincing, we might think…

However, on closer inspection, we observe that the lockdown divides the spread of the epidemic into two peaks of infected individuals, separated by at least the lockdown duration. The longer the lockdown, the more the peaks are flattened and spaced out.

By contrast, without any lockdown, only one peak of infected people is observed, the single peak being higher than either of the two peaks observed in the scenarios with lockdowns.

In other words, a lockdown strategy spreads out over time the number of people who are likely to require medical assistance in order to recover.

This is exactly what governments are looking for! In the real world, countries struggle to provide intensive care and hospital beds to symptomatic individuals, and everything that may ease this burden is a life-saving improvement.

KEY TAKEAWAY 7: in large graphs with a greater pool of susceptible individuals, lockdown strategies tend to divide the spread of the epidemic into two peaks of infected individuals. Each of the two peaks is significantly reduced compared to the no-lockdown single peak of approximately 200k infected individuals (15% reduction for a 3-month lockdown, considering the highest of the two peaks). Lockdown strategies can thus make a real difference for the crisis management in health care facilities.

2.2.3. Impact of the testing strategy

What about the testing policy which is currently enforced in France? Running 10k tests per day on 67M individuals with a quarantine probability of 0.3 is equivalent to running 150 tests per day on 1M individuals.

It seems quite inefficient, as the small differences compared to the worst-case scenario are most probably due to the randomness of the simulations alone.

What if 750 tests are carried out per day instead of 150? This would correspond to the French government’s plan to increase the number of daily tests from 10k to 50k.

It is already more efficient, as the number of susceptible individuals increases by 12% and the peak intensity decreases by almost 3%. But not efficient enough yet.

2.2.4. Combined impact of quarantine and lockdown strategies

As we did for small graphs, let us combine lockdown and testing strategies — still enforced with a threshold of 15k infected individuals and p[quarantine] = 0.3.

Interestingly, it seems that a 3-month lockdown, when combined with 150 daily tests, is enough to contain the epidemic quickly (before day 180) and to significantly limit the number of individuals exposed to the virus.

Let us zoom on this 3-month lockdown and 150 daily tests scenario:

Less than 7% of individuals are exposed to the virus, and the peak of infected individuals only amounts to 2% of the whole population. The total number of deaths is divided by more than 13 compared to the worst-case scenario. The initial peak is reduced by almost 85%, even though it is moved up to day 92 instead of day 133. Moreover, contrary to the no-quarantine scenario, no second peak is observed.

What happens with 750 tests a day?

It seems that, with 750 daily tests, a 2-month lockdown is enough to contain the epidemic before day 160.

Let us zoom on this 2-month lockdown and 750 daily tests scenario:

That is impressive: the epidemic is fully contained quickly, and with similar results compared to the 3-month lockdown — 150 daily tests scenario.

KEY TAKEAWAY 8: in large graph simulations, the combination of early lockdown and quarantine strategies remains the optimal way to deal with the epidemic. The virtuous circle observed in small graph simulations scales up. A 3-month lockdown, when combined with 150 daily tests, is enough to fully contain the epidemic in less than 6 months. By increasing the daily testing capacity to 750, a 2-month lockdown becomes enough to reach similar results in about 5 months. In both cases, no second wave of infected individuals is observed.

3. Extensions of the model

We have identified the following ideas to explore next:

  • It would be interesting to take into account the age distribution of the population, or other criteria that may influence the epidemic: incubation time, infection time, fatality rate, etc.
  • It would also be worthwhile to model the fact that the fatality rate probably increases once the number of infected individuals exceeds a certain threshold, due to the saturation of national health systems. The same goes for the probability of testing, which rises as the awareness of the medical community and governments increases. These two factors would influence the model in opposite directions.
  • Finally, we chose to run our simulations on a maximum population of 1M individuals. Running them on a population similar to that of a country like France (with 67M individuals) would surely prove to be very instructive.

Conclusion

We have highlighted a few key takeaways throughout the article. Our model, which we designed solely for pedagogical purposes, supports the following conclusions:

  • Lockdown strategies are effective in reducing the impact of the virus, especially when enforced early. The later the lockdown is enforced, the longer it should be in order to be effective.
  • Longer lockdown strategies are more effective, but with diminishing returns. Therefore, a significant delay in lockdown enforcement is very hard, if not impossible, to make up for by simply extending the lockdown duration.
  • When the population is large, lockdown strategies tend to divide the spread of the epidemic into two peaks of infected individuals. Each of the two peaks is significantly reduced compared to the no-lockdown single peak. This can make a real difference for the crisis management in health care facilities.
  • A testing policy, with a quarantine for the positively tested individuals, is also effective in reducing the impact of the virus, especially when enforced early.
  • Testing strategies are all the more efficient as the daily testing capacity increases. However, considering the practical difficulties of implementing large-scale testing, quarantine strategies alone may not be as efficient as lockdown strategies.

Finally, combining a testing policy with a lockdown strategy seems to be the optimal course of action to deal with the epidemic.

For our large graph simulation, a 3-month lockdown with a strategy of 150 daily tests (out of a population of 1 million individuals) is enough to fully contain the epidemic in less than 6 months with impressive results. By increasing the daily testing capacity to 750, a 2-month lockdown becomes enough to reach similar results in about 5 months. In both cases, no second epidemic wave is observed.

Combining a testing policy with a lockdown strategy creates a virtuous circle: as the lockdown slows down the spread of the epidemic, more individuals can be tested and quarantined, further containing the virus.

I would like to thank my wife Pauline who, besides putting up with a few long evenings of epidemic modeling, greatly improved this article with her talent for explaining complicated things clearly as well as her meticulous proofreading.

I would also like to thank Georgina HALL, Assistant Professor of Decision Sciences at INSEAD, Olivier HAMON, CTO of Citalid, as well as my brother-in-law Romain COSSON. Discussing mathematics with them is always a pleasure and a learning experience: they contributed a lot to this article with their ideas, support and proofreading.

Appendices

For a summary table of all the simulations, please visit the original post at https://www.citalid.com/blog/covid-19-containing-the-epidemic/

Originally published at https://citalid.com on April 9, 2020.
